
    UAV flight coordination for communication networks: Genetic algorithms versus game theory

    The autonomous, coordinated flight of groups of unmanned aerial vehicles that maximises network coverage to mobile ground-based units while efficiently utilising the available on-board power is a complex problem. Coordination involves fulfilling multiple objectives that depend directly on dynamic, unpredictable and uncontrollable phenomena. In this paper, two systems are presented and compared on their ability to reposition fixed-wing unmanned aerial vehicles so as to maintain a useful airborne wireless network topology. Genetic algorithms and non-cooperative games are employed to generate optimal flying solutions. Both methods consider realistic kinematics for hydrocarbon-powered medium-altitude, long-endurance aircraft. Coupled with a communication model that accounts for environmental conditions, they optimise flight to maximise the number of supported ground-based units. Results from large-scale scenarios highlight the ability of genetic algorithms to evolve flexible sets of manoeuvres that keep the flying vehicles separated and provide optimal solutions with shorter settling times. In comparison, game theory identifies strategies of predefined manoeuvres that maximise coverage but require more time to converge.
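    The genetic-algorithm side of this comparison can be illustrated with a minimal sketch. The setup below is invented for the example, not taken from the paper: each UAV picks a heading for its next move, a candidate solution is one heading per UAV, and fitness is the number of ground units within a fixed radio range of at least one UAV after the move. The unit positions, range, speed and GA parameters are all illustrative assumptions.

    ```python
    import math
    import random

    # Illustrative toy scenario (all values assumed, not from the paper).
    UNITS = [(10, 0), (0, 10), (-8, 3), (5, -7)]   # ground-unit positions
    RANGE = 9.0                                     # radio coverage radius
    SPEED = 3.0                                     # distance flown per step

    def coverage(uav_positions):
        """Number of ground units within RANGE of at least one UAV."""
        return sum(
            any(math.dist(u, p) <= RANGE for p in uav_positions)
            for u in UNITS
        )

    def move(pos, heading):
        """Advance one UAV by SPEED along the given heading (radians)."""
        return (pos[0] + SPEED * math.cos(heading),
                pos[1] + SPEED * math.sin(heading))

    def evolve(start_positions, pop=30, gens=40, seed=0):
        """Evolve one heading per UAV to maximise post-move coverage."""
        rng = random.Random(seed)
        n = len(start_positions)
        population = [[rng.uniform(0, 2 * math.pi) for _ in range(n)]
                      for _ in range(pop)]
        fitness = lambda h: coverage(
            [move(p, a) for p, a in zip(start_positions, h)])
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop // 2]          # elitist selection
            children = []
            for _ in range(pop - len(parents)):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(n)
                child = a[:cut] + b[cut:]            # one-point crossover
                if rng.random() < 0.2:               # mutation
                    child[rng.randrange(n)] = rng.uniform(0, 2 * math.pi)
                children.append(child)
            population = parents + children
        return max(population, key=fitness)

    best = evolve([(0.0, 0.0), (2.0, 2.0)])
    ```

    In the paper this fitness would be replaced by the communication model and the heading choices constrained by fixed-wing kinematics; the elitist select-crossover-mutate loop is the generic GA skeleton.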

    Discovering Schema-based Action Sequences through Play in Situated Humanoid Robots

    Exercising sensorimotor and cognitive functions allows humans, including infants, to interact with the environment and the objects within it. During everyday activities, infants continuously enrich their repertoire of actions and, through play, experimentally plan those actions in sequences to achieve desired goals. These goals, represented as perceptual target states, are built on previously acquired experiences that infants use to predict the outcomes of their actions. Imitating this, in developmental robotics we seek methods that allow autonomous embodied agents with no prior knowledge to acquire information about the environment. Like infants, robots that actively explore their surroundings and manipulate proximate objects are capable of learning. Their understanding of the environment develops through the discovery of actions and their association with the resulting perceptions of the world. We extend the development of Dev-PSchema, a schema-based, open-ended learning system, and examine the infant-like discovery of new generalised skills while engaging with objects in free play using an iCub robot. Our experiments demonstrate the capability of Dev-PSchema to use the newly discovered skills to solve user-defined goals beyond its past experiences. The robot can generate and evaluate sequences of interdependent high-level actions to form potential solutions and ultimately solve complex problems up to tool use.

    Developing Hierarchical Schemas and Building Schema Chains Through Practice Play Behavior

    Examining the different stages of learning through play in humans during early life has long been a topic of interest. Play evolves from practice play to symbolic play and later to play with rules. During practice play, infants develop knowledge while interacting with the surrounding objects, creating new knowledge about objects and object-related behaviors. Such knowledge is used to form schemas that capture the manifestation of sensorimotor experiences. Through subsequent play, certain schemas are further combined into chains able to achieve behaviors that require multiple steps. These chains of schemas demonstrate the formation of higher-level actions in a hierarchical structure. In this work we present a schema-based play generator for artificial agents, termed Dev-PSchema. Through experiments in a simulated environment and with the iCub robot, we demonstrate the ability of our system to create schemas of sensorimotor experiences from playful interaction with the environment. We show the creation of schema chains consisting of sequences of actions that allow an agent to autonomously perform complex tasks. Beyond demonstrating the ability to learn through playful behavior, we show the capability of Dev-PSchema to simulate different infants with different preferences toward novel versus familiar objects.
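    The idea of chaining schemas into multi-step behaviors can be sketched as a search over precondition/effect rules. In the toy model below, a schema is a (preconditions, action, add-effects, delete-effects) tuple over symbolic facts, and a breadth-first search composes schemas into the shortest chain that reaches a goal. The schema names, facts and the tool-use scenario are invented for illustration; Dev-PSchema's actual sensorimotor representation is richer than this.

    ```python
    from collections import deque

    # Hypothetical schemas: (preconditions, action, add-effects, delete-effects).
    SCHEMAS = [
        ({"hand_empty", "tool_on_table"}, "grasp_tool",
         {"holding_tool"}, {"hand_empty", "tool_on_table"}),
        ({"holding_tool", "toy_far"}, "pull_with_tool",
         {"toy_near"}, {"toy_far"}),
        ({"toy_near", "holding_tool"}, "drop_tool",
         {"hand_empty"}, {"holding_tool"}),
        ({"hand_empty", "toy_near"}, "grasp_toy",
         {"holding_toy"}, {"hand_empty"}),
    ]

    def plan(state, goal):
        """Breadth-first search for a schema chain from state to a goal fact set."""
        frontier = deque([(frozenset(state), [])])
        seen = {frozenset(state)}
        while frontier:
            facts, chain = frontier.popleft()
            if goal <= facts:                    # all goal facts achieved
                return chain
            for pre, action, add, delete in SCHEMAS:
                if pre <= facts:                 # schema applicable here
                    nxt = frozenset((facts - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, chain + [action]))
        return None

    chain = plan({"hand_empty", "tool_on_table", "toy_far"}, {"holding_toy"})
    # chain: grasp_tool -> pull_with_tool -> drop_tool -> grasp_toy
    ```

    The chain is a sequence of interdependent actions: each schema's preconditions are satisfied only by the effects of the schemas before it, which is the sense in which chaining yields higher-level, multi-step behavior.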